4 research outputs found
Optimal Control of SOAs With Artificial Intelligence for Sub-Nanosecond Optical Switching
Novel approaches to switching ultra-fast semiconductor optical amplifiers using artificial intelligence algorithms (particle swarm optimisation, ant colony optimisation, and a genetic algorithm) are developed and applied both in simulation and experiment. Effective off-on switching (settling) times of 542 ps are demonstrated with just 4.8% overshoot, achieving an order of magnitude improvement over previous attempts described in the literature and standard damping techniques from control theory.
Optimal and Low Complexity Control of SOA-Based Optical Switching with Particle Swarm Optimisation
We propose a reliable, low-complexity particle swarm optimisation (PSO) approach to control semiconductor optical amplifier (SOA)-based switches. We experimentally demonstrate less than 610 ps off-on switching (settling) time and less than 2.2% overshoot with a 20x lower sampling rate and 8x reduced DAC resolution.
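The two abstracts above both rely on particle swarm optimisation to shape an SOA drive signal. The following is a minimal, generic PSO sketch, not the authors' implementation: the cost function, hyperparameters (inertia and acceleration weights), and function names are illustrative. In the papers' setting, `cost` would map candidate drive-signal parameters to a penalty combining settling time and overshoot.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0), seed=0):
    """Minimal particle swarm optimiser: each particle remembers its personal
    best position and is pulled toward both that and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()
```

For example, minimising a simple quadratic cost, `pso(lambda p: float(np.sum(p**2)), dim=3)`, drives the best cost toward zero; in the switching application the same loop would instead evaluate each candidate on the measured or simulated SOA step response.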
AI-optimised tuneable sources for bandwidth-scalable, sub-nanosecond wavelength switching
Wavelength routed optical switching promises low power and latency networking for data centres, but requires a wideband wavelength tuneable source (WTS) capable of sub-nanosecond switching at every node. We propose a hybrid WTS that uses time-interleaved tuneable lasers, each gated by a semiconductor optical amplifier, where the performance of each device is optimised using artificial intelligence. Through simulation and experiment we demonstrate record wavelength switch times below 900 ps across 6.05 THz (122×50 GHz) of continuously tuneable optical bandwidth. A method for further bandwidth scaling is evaluated and compared to alternative designs.
Recommended from our members
Techniques for applying reinforcement learning to routing and wavelength assignment problems in optical fiber communication networks
We propose a novel application of reinforcement learning (RL) with invalid action masking and a novel training methodology for routing and wavelength assignment (RWA) in fixed-grid optical networks and demonstrate the generalizability of the learned policy to a realistic traffic matrix unseen during training. Through the introduction of invalid action masking and a new training method, the applicability of RL to RWA in fixed-grid networks is extended from considering connection requests between nodes to servicing demands of a given bit rate, such that lightpaths can be used to service multiple demands subject to capacity constraints. We outline the additional challenges involved for this RWA problem, for which we found that standard RL had low performance compared to that of baseline heuristics, in comparison with the connection requests RWA problem considered in the literature. Thus, we propose invalid action masking and a novel training method to improve the efficacy of the RL agent. With invalid action masking, domain knowledge is embedded in the RL model to constrain the action space of the RL agent to lightpaths that can support the current request, reducing the size of the action space and thus increasing the efficacy of the agent. In the proposed training method, the RL model is trained on a simplified version of the problem and evaluated on the target RWA problem, increasing the efficacy of the agent compared with training directly on the target problem. RL with invalid action masking and this training method outperforms standard RL and three state-of-the-art heuristics, namely,
k-shortest-path first-fit, first-fit k-shortest-path, and k-shortest-path most-utilized, consistently across uniform and nonuniform traffic in terms of the number of accepted transmission requests for two real-world core topologies, NSFNET and COST-239. The RWA runtime of the proposed RL model is comparable to that of these heuristic approaches, demonstrating the potential for real-world applicability. Moreover, we show that the RL agent trained on uniform traffic is able to generalize well to a realistic nonuniform traffic distribution not seen during training, thus outperforming the heuristics for this traffic. Visualization of the learned RWA policy reveals an RWA strategy that differs significantly from those of the heuristic baselines in terms of the distribution of services across channels and the distribution across links.
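The invalid action masking described above can be sketched as follows. This is a generic illustration, not the paper's model: the idea is simply to set the logits of actions (lightpaths) that cannot service the current request to negative infinity before the softmax, so the policy assigns them zero probability and the agent only ever samples feasible actions.

```python
import numpy as np

def masked_policy(logits, valid_mask):
    """Turn raw policy logits into action probabilities, with invalid
    actions masked out: their logits become -inf, so softmax gives them
    exactly zero probability."""
    masked = np.where(valid_mask, logits, -np.inf)
    z = masked - masked.max()      # subtract max for numerical stability
    p = np.exp(z)                  # exp(-inf) == 0.0 for masked entries
    return p / p.sum()
```

For instance, with logits `[2.0, 1.0, 0.5]` and mask `[True, False, True]`, the second action receives probability exactly zero and the remaining mass is renormalised over the two feasible lightpaths. Embedding this feasibility check in the model is what shrinks the effective action space and, per the abstract, improves the agent's efficacy over standard RL.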